# Transfer learning optimization
**Whisper Medium Oswald** · jsbeaudry · Apache-2.0
A Haitian Creole speech recognition model fine-tuned from OpenAI's Whisper-medium, focused on high-accuracy transcription.
Speech Recognition · Transformers · Other · 102 downloads · 1 like
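
A minimal usage sketch for this checkpoint with the Transformers automatic-speech-recognition pipeline. The repo id `jsbeaudry/whisper-medium-oswald` is an assumption inferred from the author and model name above, and the audio file name is a placeholder.

```python
# Minimal sketch: transcribe Haitian Creole audio with the fine-tuned Whisper checkpoint.
# The repo id below is inferred from the author/model name listed above; verify it on the Hub.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jsbeaudry/whisper-medium-oswald",  # assumed repo id
)

# Accepts a local audio file path (or a NumPy array of samples).
result = asr("creole_sample.wav")
print(result["text"])
```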
**Vit Base Beans** · HieuVo · Apache-2.0
An image classification model based on Google's Vision Transformer (ViT) architecture, fine-tuned on the beans dataset.
Image Classification · Transformers · 49 downloads · 1 like
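
For inference, a fine-tune like this one can be queried through the image-classification pipeline. A minimal sketch follows; the repo id `HieuVo/vit-base-beans` and the image path are assumptions.

```python
# Minimal sketch: classify a bean-leaf image with a fine-tuned ViT checkpoint.
# The repo id is inferred from the author/model name listed above; adjust if it differs.
from transformers import pipeline

classifier = pipeline("image-classification", model="HieuVo/vit-base-beans")  # assumed repo id
predictions = classifier("bean_leaf.jpg")  # local path or URL to an image
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```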
**Zoobot Encoder Euclid** · mwalmsley · Apache-2.0
An image classification model built with the timm library, using zoobot-encoder-convnext_nano as the base model.
Image Classification · 206 downloads · 1 like

**Vit B 16 Aion400m E32 1finetuned 1** · Albe-njupt · MIT
A Vision Transformer model from the OpenCLIP framework, fine-tuned for zero-shot image classification.
Image Classification · 18 downloads · 1 like

**Segformerb5 Finetuned Largerimages** · JCAI2000 · Other license
An image segmentation model based on the SegFormer-B5 architecture, fine-tuned on larger images; it separates background from branch regions.
Image Segmentation · Transformers · 14 downloads · 0 likes
**Deit Tiny Patch16 224 Finetuned Main Gpu 20e Final** · Gokulapriyan · Apache-2.0
A lightweight image classification model based on the DeiT-tiny architecture, reaching 98.56% validation accuracy after fine-tuning on a custom image dataset.
Image Classification · Transformers · 15 downloads · 0 likes

**Convnext Tiny 224 Finetuned Aiornot** · kanak8278 · Apache-2.0
A computer vision model based on the ConvNeXt-Tiny architecture, fine-tuned on a specific dataset for image classification.
Image Classification · Transformers · 16 downloads · 0 likes

**Swin Tiny Patch4 Window7 224 Finetuned Main Gpu 20e Final** · Gokulapriyan · Apache-2.0
A fine-tuned image classification model based on the Swin Transformer architecture, reaching 99.17% validation accuracy on an image-folder dataset.
Image Classification · Transformers · 16 downloads · 0 likes

**Vit Base Patch16 224 Finetuned Flower** · fzaghloul · Apache-2.0
A Vision Transformer model fine-tuned from Google's ViT base model on a flower image dataset.
Image Classification · Transformers · 35 downloads · 0 likes
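
Most of the image classifiers in this list follow the same transfer-learning recipe: take a pretrained backbone, replace its classification head, and fine-tune on an image-folder dataset. The sketch below shows that recipe with the Transformers `Trainer`; the dataset path, base checkpoint, and hyperparameters are illustrative assumptions, not the settings used by any particular model above.

```python
# Minimal sketch of the common fine-tuning recipe: pretrained backbone + fresh head,
# trained on a folder-per-class image dataset. Paths and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("imagefolder", data_dir="path/to/flowers")  # one subfolder per class
labels = dataset["train"].features["label"].names

checkpoint = "google/vit-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # drop the pretrained ImageNet head, initialize a new one
)

def transform(batch):
    # Resize and normalize images into the pixel_values the model expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(transform)

def collate(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

args = TrainingArguments(
    output_dir="vit-finetuned-flowers",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=10,
    remove_unused_columns=False,  # keep the raw "image" column for the on-the-fly transform
)

trainer = Trainer(model=model, args=args, train_dataset=dataset["train"], data_collator=collate)
trainer.train()
```

Swapping `checkpoint` for a DeiT, Swin, ConvNeXt, or BEiT model id reproduces the other variants in this list; only the backbone changes, not the recipe.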
**Deit Tiny Patch16 224 Finetuned Og Dataset 10e** · Gokulapriyan · Apache-2.0
A lightweight image classification model based on the DeiT-tiny architecture, reaching 94.8% accuracy after fine-tuning on a custom image dataset.
Image Classification · Transformers · 17 downloads · 0 likes

**Swinv2 Tiny Patch4 Window8 256 Finetuned Og Dataset 10e Finetuned Og Dataset 10e** · Gokulapriyan · Apache-2.0
A lightweight image classification model based on the SwinV2 architecture, fine-tuned on an image-folder dataset to 97.83% accuracy.
Image Classification · Transformers · 17 downloads · 0 likes

**Swin Tiny Patch4 Window7 224 Finetuned Og Dataset 10e Finetuned Og Dataset 10e** · Gokulapriyan · Apache-2.0
A vision model based on the Swin Transformer architecture, fine-tuned for image classification.
Image Classification · Transformers · 15 downloads · 0 likes

**Swinv2 Tiny Patch4 Window8 256 Finetuned Og Dataset 5e** · Gokulapriyan · Apache-2.0
A fine-tuned image classification model based on the Swin Transformer V2 Tiny architecture, reaching 96.35% accuracy on the evaluation set.
Image Classification · Transformers · 15 downloads · 0 likes
**Beitv2 Martin** · molsen · Apache-2.0
A fine-tuned version of microsoft/beit-base-patch16-224-pt22k-ft22k; its intended use and training data are not stated.
Image Classification · Transformers · 17 downloads · 0 likes

**Swin Tiny Patch4 Window7 224 Finetuned Aiornot Baseline** · Thabet · Apache-2.0
A vision model based on the Swin Transformer Tiny architecture, fine-tuned on an unspecified dataset for image classification.
Image Classification · Transformers · 17 downloads · 0 likes

**Vit Base Patch16 224 In21k Fog Or Smog Classification** · uisikdag · Apache-2.0
An image classification model fine-tuned from google/vit-base-patch16-224-in21k, reaching 91% accuracy on the test set.
Image Classification · Transformers · 19 downloads · 0 likes

**Swin Tiny Patch4 Window7 224 Finetuned Fluro Cls** · zlgao · Apache-2.0
A fine-tuned model based on the Swin Transformer Tiny architecture, used for image classification.
Image Classification · Transformers · 19 downloads · 0 likes
**Vit Base Patch16 224 In21k Lcbsi** · polejowska · Apache-2.0
A fine-tuned model based on Google's Vision Transformer (ViT) architecture, suited to image classification.
Image Classification · Transformers · 33 downloads · 0 likes

**Swin Base Patch4 Window7 224 In22k Finetuned Cifar10** · Weili · Apache-2.0
An image classification model based on the Swin Transformer architecture, reaching 98.9% accuracy after fine-tuning on CIFAR-10.
Image Classification · Transformers · 19 downloads · 0 likes

**Bert Base Multilingual Cased Sv2** · monakth · Apache-2.0
A multilingual question-answering model fine-tuned from bert-base-multilingual-cased on the SQuAD v2 dataset.
Question Answering · Transformers · 13 downloads · 0 likes
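
A minimal sketch of querying a SQuAD v2 fine-tune like this with the question-answering pipeline; the repo id `monakth/bert-base-multilingual-cased-sv2` and the example question and context are assumptions.

```python
# Minimal sketch: extractive QA with a multilingual SQuAD v2 fine-tune.
# The repo id is inferred from the author/model name listed above; verify it on the Hub.
from transformers import pipeline

qa = pipeline("question-answering", model="monakth/bert-base-multilingual-cased-sv2")  # assumed repo id
answer = qa(
    question="Where was the agreement signed?",
    context="The agreement was signed in Geneva after several months of negotiation.",
)
print(answer["answer"], round(answer["score"], 3))
```

Because SQuAD v2 includes unanswerable questions, models trained on it can also return an empty answer when the context contains no span that answers the question.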
**Beit Base Land Cover V0.1** · dfurman · Apache-2.0
An image classification model based on the BEiT architecture, fine-tuned on an image-folder dataset to 98.7% accuracy.
Image Classification · Transformers · 22 downloads · 0 likes

**Beit Base Patch16 224 Pt22k Ft22k Finetuned Mnist** · Karelito00 · Apache-2.0
A Vision Transformer model based on the BEiT architecture, fine-tuned on the MNIST handwritten-digit dataset to 99.35% accuracy.
Image Classification · Transformers · 19 downloads · 0 likes

**Cvt 13 384 In22k FV Finetuned Memes** · jayanta · Apache-2.0
An image classification model fine-tuned from microsoft/cvt-13-384-22k on an image-folder dataset, reaching 83.46% accuracy on the evaluation set.
Image Classification · Transformers · 11 downloads · 0 likes

**Convnext Tiny 224 Finetuned** · ImageIN · Apache-2.0
A fine-tuned version of facebook/convnext-tiny-224 for image classification, with strong results on its evaluation set.
Image Classification · Transformers · 15 downloads · 0 likes
**Vc Bantai Vit Withoutambi Adunest V1** · AykeeSalazar · Apache-2.0
An image classification model fine-tuned from Google's ViT-base model, reaching 91.81% accuracy on the evaluation set.
Image Classification · Transformers · 28 downloads · 0 likes

**Gptuz** · rifkat · Apache-2.0
GPTuz is an Uzbek language model based on GPT-2 small, built through transfer learning and fine-tuning.
Large Language Model · Transformers · Other · 42 downloads · 2 likes

**Vit Cifar100** · Ahmed9275 · Apache-2.0
An image classification model fine-tuned from Google's ViT base model on CIFAR-100, reaching 89.85% accuracy.
Image Classification · Transformers · 920 downloads · 4 likes

**Beit Finetuned** · jadohu · Apache-2.0
A BEiT-base model fine-tuned on CIFAR-10 for image classification, reaching 99.18% accuracy on the evaluation set.
Image Classification · Transformers · 24 downloads · 1 like
**Vit Base Patch16 224 Finetuned Eurosat** · zzzzzzttt · Apache-2.0
A version of Google's ViT model fine-tuned on an image-folder dataset for image classification, reaching 90.72% accuracy.
Image Classification · Transformers · 31 downloads · 0 likes

**Van Base Finetuned Eurosat Imgaug** · nielsr · Apache-2.0
An image classification model fine-tuned from Visual-Attention-Network/van-base on an image-folder dataset, reaching 98.85% accuracy.
Image Classification · Transformers · Other · 14 downloads · 0 likes

**Swin Tiny Patch4 Window7 224 Finetuned Cifar10** · nielsr · Apache-2.0
An image classification model based on the Swin Transformer Tiny architecture, fine-tuned on CIFAR-10 to 97.89% accuracy.
Image Classification · Transformers · 26 downloads · 0 likes

**Distilbert Base Uncased Finetuned Mi** · yancong · Apache-2.0
A fine-tuned version of distilbert-base-uncased trained on an unspecified dataset, intended for text tasks.
Large Language Model · Transformers · 26 downloads · 1 like
**T5 Base Japanese** · sonoisa
A T5 (Text-to-Text Transfer Transformer) model pretrained on a Japanese corpus, suited to a range of text generation tasks.
Large Language Model · Japanese · 13.85k downloads · 49 likes

**Indo Roberta Indonli** · StevenLimcorn · MIT
An Indonesian natural language inference classifier based on the Indo-roberta model, trained on the IndoNLI dataset.
Text Classification · Transformers · Other · 34 downloads · 1 like

**Flyswot Test** · davanstrien · Apache-2.0
An image classification model based on the ConvNeXt architecture, fine-tuned on an image-folder dataset.
Image Classification · Transformers · 23 downloads · 0 likes

**Gpt2 Small Spanish** · datificate · Apache-2.0
A Spanish language model based on the GPT-2 small architecture, fine-tuned on Spanish Wikipedia via transfer learning.
Large Language Model · Spanish · 13.14k downloads · 30 likes
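
A minimal generation sketch for this kind of language-adapted GPT-2 fine-tune; the repo id `datificate/gpt2-small-spanish` and the prompt are assumptions.

```python
# Minimal sketch: Spanish text generation with the GPT-2 small fine-tune.
# The repo id is inferred from the author/model name listed above; verify it on the Hub.
from transformers import pipeline

generator = pipeline("text-generation", model="datificate/gpt2-small-spanish")  # assumed repo id
output = generator("La inteligencia artificial es", max_new_tokens=40, num_return_sequences=1)
print(output[0]["generated_text"])
```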
**Quran Speech Recognizer** · Nuwaisir
An Arabic speech recognition system built via transfer learning, designed to identify Quran recitation and locate the corresponding chapter.
Speech Recognition · Transformers · 123 downloads · 9 likes

**T5 3b** · google-t5 · Apache-2.0
T5-3B is Google's 3-billion-parameter Text-to-Text Transfer Transformer, which handles a wide range of NLP tasks through a unified text-to-text framework.
Large Language Model · Transformers · Multilingual · 340.75k downloads · 46 likes
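
Because T5 casts every task as text-to-text, usage reduces to prefixing the input with a task description. A minimal sketch with the text2text-generation pipeline follows; note the 3B checkpoint is roughly 11 GB of weights, so a smaller variant such as `google-t5/t5-base` can stand in for a quick test.

```python
# Minimal sketch: T5's unified text-to-text interface, driven by task prefixes.
from transformers import pipeline

t5 = pipeline("text2text-generation", model="google-t5/t5-3b")  # large download; swap in "google-t5/t5-base" to test
print(t5("translate English to German: The house is wonderful.")[0]["generated_text"])
print(t5("summarize: Transfer learning adapts a pretrained model to a new task by "
         "fine-tuning it on a smaller, task-specific dataset.")[0]["generated_text"])
```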
**Code Trans T5 Large Source Code Summarization Sql Transfer Learning Finetune** · SEBIS
A SQL code summarization model based on the T5-large architecture, adapted through transfer learning and fine-tuning to generate functional descriptions of SQL code.
Text Generation · Transformers · 19 downloads · 0 likes

**Gpt2 Small Dutch** · GroNLP
A Dutch adaptation of OpenAI's GPT-2 small model, created by retraining the word-embedding layer and fine-tuning for Dutch.
Large Language Model · Other · 945 downloads · 5 likes